DNS servers should have NS and SOA records #8047
base: main
Conversation
plus rustfmt, clippy
force-pushed from 842455b to f349290
force-pushed from f349290 to fa47ab1
```rust
impl From<Srv> for DnsRecord {
    fn from(srv: Srv) -> Self {
        DnsRecord::Srv(srv)
    }
}

#[derive(
    Clone,
    Debug,
    Serialize,
    Deserialize,
    JsonSchema,
    PartialEq,
    Eq,
    PartialOrd,
    Ord,
)]
pub struct Srv {
    pub prio: u16,
    pub weight: u16,
    pub port: u16,
    pub target: String,
}

impl From<v1::config::Srv> for Srv {
    fn from(other: v1::config::Srv) -> Self {
        Srv {
            prio: other.prio,
            weight: other.weight,
            port: other.port,
            target: other.target,
        }
    }
}
```
the other option here is to use the `v1::config::Srv` type directly in v2, because it really has not changed. weaving the V1/V2 types together seems more difficult to think about generally, but i'm very open to the duplication being more confusing if folks feel that way.
I would probably use the v1 types directly but I can see going either way.
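a minimal sketch of the re-export alternative being discussed (the module path here is an assumption based on this diff, not the actual crate layout):

```rust
// Hypothetical sketch: instead of redefining `Srv` in v2 and writing a
// `From<v1::config::Srv>` impl, the v2 config module could re-export the
// unchanged v1 type directly. The module path is assumed for illustration.
pub use crate::v1::config::Srv;
```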
RSS doesn't set up external DNS at initial config time; that still happens when the recovery silo is created in Nexus after handoff.
just felt that name implied more than it should
dev-tools/reconfigurator-cli/tests/output/cmds-set-remove-mupdate-override-stdout
```
@@ -4,9 +4,12 @@ load-example --seed test_expunge_newly_added_external_dns

blueprint-show 3f00b694-1b16-4aaa-8f78-e6b3a527b434
blueprint-edit 3f00b694-1b16-4aaa-8f78-e6b3a527b434 expunge-zone 9995de32-dd52-4eb1-b0eb-141eb84bc739
blueprint-diff 3f00b694-1b16-4aaa-8f78-e6b3a527b434 366b0b68-d80e-4bc1-abd3-dc69837847e0
```
unfortunately, between the diff size and having conflicting changes on main, i had a hard time keeping the output a more legible "file moved and now has some additional lines". instead, git shows the diff as a fully new file even though it's mostly the prior content. `blueprint-diff` includes the DNS output though, which is of course what i actually care about here. if this is a bear to review (and i'm pretty empathetic to it being a lot) i'm open to moving the DNS checking over to a new test and leaving this unchanged, or moving the internal DNS testing to live in this test as well.
```
blueprint-show 62422356-97cd-4e0f-bd17-f946c25193c1
blueprint-edit 62422356-97cd-4e0f-bd17-f946c25193c1 expunge-zone 3fc76516-d258-48bc-b25e-9fca5e37c888
blueprint-diff 62422356-97cd-4e0f-bd17-f946c25193c1 14b8ff1c-91ff-4ab7-bb64-3c0f5f642e09
```
this one surprised me and i've added this diff to reiterate that for testing: internal DNS zones are not replaced simply as a result of being expunged, since we might need to reuse the IP that server was listening on. for internal DNS in particular, the expunged zone must be `ready_for_cleanup`. i don't know concretely what that means (sled-agent did a collection and saw the zone is gone?), but that's a critical step in actually seeing DNS changes in the diff below.
> i don't know concretely what that means (sled-agent did a collection and saw the zone is gone?)
Almost! Reconfigurator will mark a zone ready for cleanup during planning if, in the most recent inventory collection, sled-agent reported:
- the zone is gone
- the generation of the sled's config is >= the generation in which the zone was expunged (to avoid a race where the zone is gone because it hasn't even started yet)
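roughly, as a sketch of that check (the type and field names here are made up for illustration; they are not the actual planner structures):

```rust
// Hypothetical stand-in types; the real Reconfigurator planner types differ.
struct ZoneInventory {
    /// Whether the latest inventory collection still shows the zone running.
    zone_running: bool,
    /// Generation of the sled config that sled-agent reported having applied.
    sled_config_generation: u64,
}

struct ExpungedZone {
    /// Generation of the sled config in which this zone was expunged.
    expunged_as_of_generation: u64,
}

/// A zone is ready for cleanup only if the latest inventory shows it gone
/// *and* the sled has applied a config at least as new as the one that
/// expunged it (otherwise the zone might be "gone" only because it hasn't
/// started yet).
fn ready_for_cleanup(inv: &ZoneInventory, zone: &ExpungedZone) -> bool {
    !inv.zone_running
        && inv.sled_config_generation >= zone.expunged_as_of_generation
}

fn main() {
    let inv = ZoneInventory { zone_running: false, sled_config_generation: 7 };
    let zone = ExpungedZone { expunged_as_of_generation: 6 };
    assert!(ready_for_cleanup(&inv, &zone));
}
```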
on one hand: now that DNS servers are referenced by potentially two different AAAA records, both of those records are potentially the target of an SRV record (though we don't have SRV records for the DNS interface). this test had failed at first because we'd find a DNS server's IP via the `ns1.` record, which meant we'd miss that the same zone was also referenced by an AAAA record for the illumos zone UUID.

on the other hand: `#[nexus_test]` environments involve a mock of the initial RSS environment construction? so now that the first blueprint adds NS records, this mock RSS environment was out of date, and a test that the first blueprint after "RSS" makes no change failed because the "RSS" environment was wrong.
each name has a list of records, so calling the high-level collection "records" makes for some confusing words
this is probably the more exciting part of the issues outlined in #6944. the changes here get us to the point that for both internal and external DNS, we have:

- NS records at the zone apex, one per DNS server (`ns1.<zone>`, `ns2.<zone>`, ...)
- A/AAAA records for each of the `ns*.<zone>` names described above
- an SOA record for the zone itself: `oxide.internal` (for internal DNS) and `$delegated_domain` (for external DNS)

we do not support zone transfers here. i believe the SOA record here would be reasonable to guide zone transfers if we did, but obviously that's not something i've tested.
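to make that record set concrete, here's a hypothetical example for internal DNS (the names follow the scheme above, but the server count and addresses are invented; the SOA record is illustrated separately under "SOA fields" below):

```
oxide.internal.        NS    ns1.oxide.internal.
oxide.internal.        NS    ns2.oxide.internal.
oxide.internal.        NS    ns3.oxide.internal.
ns1.oxide.internal.    AAAA  fd00:1122:3344:1::2
ns2.oxide.internal.    AAAA  fd00:1122:3344:2::2
ns3.oxide.internal.    AAAA  fd00:1122:3344:3::2
```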
SOA fields

the SOA record's `RNAME` is hardcoded to `admin@<zone_name>`. this is out of expediency to provide something, but it's probably wrong most of the time. there's no way to get an MX record installed for `<zone_name>` in the rack's external DNS servers, so barring DNS hijinks in the deployed environment, this will be a dead address. problems here are:

it seems like the best answer here is to allow configuration of the rack's delegated domain and zone after initial setup, and being able to update an administrative email would fit in pretty naturally there. but we don't have that right now, so `admin@` it is. configuration of external DNS is probably more important in the context of zone transfers and permitting a list of remote addresses to whom we're willing to permit zone transfers. so it feels like this is in the API's future at some point.
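for a sense of how that renders, here's a hypothetical SOA record in zone-file syntax (serial and timer values are invented for the example; note that the `RNAME` email `admin@<zone_name>` is written with a `.` in place of the `@`):

```
oxide.internal.  SOA  ns1.oxide.internal. admin.oxide.internal. (
                        1       ; serial (invented for the example)
                        3600    ; refresh
                        600     ; retry
                        86400   ; expire
                        60 )    ; minimum / negative-caching TTL
```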
bonus

one minorly interesting observation along the way is that external DNS servers in particular are reachable at a few addresses - whichever public address they get in the rack's internal address range, and whichever address they get in the external address range. the public address is what's used for A/AAAA records. so, if you're looking around from inside a DNS zone you can get odd-looking answers like: